  Contents:
  1. Overview
  2. Economic Challenges
  3. Atomic Challenges
  4. What to Integrate
  5. Managing Complexity
  6. Design Validation
  7. Interconnection Delays
  8. Power Dissipation
  9. Lithography Limits
  10. Micro 2012
  11. Impact on the PC platform
  12. New PC Platform Usage Models
  13. Lower Cost Peripherals
  14. Summary
Overview
Intel already had a robust track record in microprocessor development when IBM* chose the 8088 processor as the heart of the 1981 IBM PC. The 8088 was a 16-bit, third-generation microprocessor that followed "Moore's Law," as shown on the left of Figure 1. Gordon Moore made his first observation about the "doubling of transistor density on a manufactured die every year" in 1965, just six years after the planar transistor was invented at Fairchild and four years after he and Bob Noyce's team there produced the first planar integrated circuit. Gordon admits that, initially, he did not expect his law to still hold some 30 years later, but he is now confident that it will hold for another 20 years, as shown in Figure 1.

  Figure 1: Moore's Law Spans 45 Years.

Economic Challenges
The economic result of Intel's relentless pursuit of Moore's Law is perhaps even more remarkable than the technical one. The average price of a transistor has fallen by six orders of magnitude over the course of microprocessor development. This is unprecedented in world history; no other manufactured item has decreased in cost so far, so fast. It is interesting, therefore, to look into this silicon technology to understand its scope and to predict its future.

Intel's microprocessors, and many other integrated circuits, are manufactured using a planar process in which a pure silicon wafer is selectively masked and diffused with dopants to form transistors. The wafer is then selectively masked again, and metal is deposited to interconnect those transistors. The first integrated circuit used only one layer of metal; today's Pentium® II processor uses five layers of metal to increase the packing density, as shown in Figure 2.

  Figure 2: Cross-Section of Five-Layer Metal.

Transistor density is the key price driver of integrated circuits. Wafer area is a fixed cost (about $1B/acre), so product complexity and price depend upon how small the transistors can be made. Making transistors smaller is "all benefits": smaller transistors switch faster, interconnections are shorter, system reliability increases because more functions are integrated in a single place, power is lower, and cost is lower. Everything gets better, and there are no real engineering tradeoffs to make. The results Intel has achieved in its microprocessor development are shown in Figure 3. Notice how each processor's die shrinks with every new silicon process, and how the next-generation processor then starts out larger, with more transistors. The cycle repeats: smaller, faster, cheaper. Market demand for higher performance Intel processors is growing exponentially, so making them smaller also allows us to meet customer demand.
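The economics above can be sketched with some back-of-the-envelope arithmetic. The $1B/acre figure comes from the text; the die size and transistor count below are illustrative assumptions, not actual product data.

```python
# Rough cost-per-transistor arithmetic from the article's figure of
# roughly $1B per acre of processed wafer area.

ACRE_IN_MM2 = 4046.86 * 1e6   # 1 acre = 4046.86 m^2 ~ 4.05e9 mm^2
COST_PER_ACRE = 1e9           # dollars, figure from the text

cost_per_mm2 = COST_PER_ACRE / ACRE_IN_MM2   # roughly $0.25 per mm^2

die_area_mm2 = 200            # assumed die size (illustrative)
transistors = 7.5e6           # assumed transistor count (illustrative)

die_cost = cost_per_mm2 * die_area_mm2
cost_per_transistor = die_cost / transistors
print(f"silicon cost ~ ${die_cost:.2f}/die, "
      f"~ {cost_per_transistor * 1e6:.1f} micro-dollars per transistor")
```

Halving the transistor's linear dimensions quarters its area, so the same fixed wafer cost buys four times as many transistors; that is the "smaller is cheaper" cycle in a nutshell.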

  Figure 3: Intel Microprocessor Development.

Making transistors smaller consumes a large R&D budget and requires new factories to be built and retooled for each new generation of microprocessor. Manufacturing processors in large volumes is a capital-intensive business that now costs over $3B/year. There have been dramatic changes in manufacturing to meet today's demand. In the 1970s, technicians wore smocks and used tweezers to move wafers from one process step to the next. A fab plant today is quite different: technicians wear self-contained bunny suits (the real ones are white; they turn shades of metallic when put close to a marketing person!) and work in an extremely clean environment (over 100 times cleaner than the best operating theater), with robots moving large numbers of wafers between the multiple processing steps.

Atomic Challenges
So where do we go from here, and what are the challenges looking forward? Today we have the ability to double the number of transistors we integrate every 18-24 months. The challenge is deciding what to integrate, how to manage the increased complexity, and how to validate the design. We must also consider interconnection delays, increasing power dissipation due to the large transistor count, and the shorter wavelengths of light required in the photolithography process as device geometries shrink. All of these challenges are solvable with money and effort. One challenge, however, that will be difficult for even Intel to solve is the fact that as transistors get smaller, the gate thickness starts to approach the atomic scale of matter! It is postulated that the thinnest gate cannot be less than ten atoms thick, and at our current rate we should reach this limit by the year 2017. Not a major concern today, but something that Intel has a research team working on.

What to Integrate
There are many computer architecture techniques that can be employed inside the processor core to increase its performance. Fundamental techniques such as cache memory and a floating point unit were added to the processor core early on. Advanced techniques such as pipelining and multiple instruction execution were added with the Pentium® processor. Leading-edge architectural techniques such as speculative execution and data-driven, run-time instruction scheduling were added with the Pentium Pro processor. Computer science is now looking forward with a changed paradigm: don't be constrained by the hardware implications of an approach; we'll have the technology to implement it within a short time.

Transistors within the processor core communicate with each other at extremely high speed; the Pentium II processor runs at 300 MHz today, and this core frequency will continue to increase in 1998. When communicating with transistors on other devices, signals must travel over the relatively slow component-to-component interconnect, which can reduce system performance. A simple solution is to integrate more of the system hardware functions into the processor core. This must be traded off against the cost effectiveness of implementing the system function in software, in dedicated hardware, or in a combination of the two. This hardware/software balance will be continually optimized as we move into higher performance, future processor generations.

Managing Complexity
The increase in transistor density has driven a similar increase in engineering staff. There are now many hundreds of architects and design engineers working at numerous Intel sites all over the world on next-generation processor implementations. Keeping such a large team focused and productive requires a sophisticated computer network with advanced workstations and development tools. It is interesting to note that only the very latest generation of Pentium II processor based machines has enough horsepower to be used in creating the next-generation processor! And so the cycle continues.

Design Validation
Design validation and verification is now an integral part of the design process; in the early days, it was done after the processor was designed. Intel has approximately as many engineers testing and validating its processor designs as it has architecting and designing them. Additional circuitry is added to the processor core to ease the debug and validation process. Test coverage models are generated so that each and every logical function can be thoroughly tested. Additionally, legacy software tests are run on the computer model to ensure software compatibility with previous processor generations.
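The core validation idea, comparing a design under test against a trusted reference model over a complete coverage set, can be illustrated in miniature. The 8-bit adder below is a toy stand-in for any real processor model; the function names are invented for this sketch.

```python
def reference_add(a, b):
    """Trusted 'golden' reference model: 8-bit add with carry-out."""
    s = a + b
    return s & 0xFF, (s >> 8) & 1

def new_design_add(a, b):
    """The design under test (written differently on purpose)."""
    total = (a & 0xFF) + (b & 0xFF)
    return total % 256, 1 if total > 255 else 0

# Exhaustive coverage: every input combination is exercised, mirroring
# the goal that each and every logical function be thoroughly tested.
mismatches = [(a, b) for a in range(256) for b in range(256)
              if reference_add(a, b) != new_design_add(a, b)]
print("mismatches:", len(mismatches))
```

Real coverage models cannot enumerate every state of a whole processor, of course; they instead target defined coverage points, but the compare-against-a-golden-model discipline is the same.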

Interconnection Delays
As the geometries of a transistor get smaller, the propagation delay through the device also gets smaller. Unfortunately, as the metal interconnects get smaller, their resistance and capacitance increase, and therefore the propagation delay through them increases. As we move to 0.25micron geometries, the delay through the metal is greater than the delay through the transistor; not at all what we learned in engineering school! Intel has multiple approaches to this challenge, the most obvious being to use a metal with higher conductivity than the aluminum currently used in production. Copper offers some attractive properties, but it is more difficult to process. Alternative design techniques that use more transistors and less interconnect are also being investigated.
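The wire-delay problem follows directly from the resistance formula R = ρL/(W·T). The resistivities below are standard textbook values; the wire dimensions are illustrative, not actual process parameters.

```python
# Why shrinking wires slows them down: R = rho * L / (W * T).
# Halving both width and thickness quadruples the resistance of a
# fixed-length wire, while its capacitance stays roughly constant,
# so the RC delay grows. Dimensions below are illustrative only.

RHO_AL = 2.65e-8  # ohm*m, aluminum (textbook value)
RHO_CU = 1.68e-8  # ohm*m, copper (textbook value)

def wire_resistance(rho, length_m, width_m, thickness_m):
    return rho * length_m / (width_m * thickness_m)

L = 1e-3  # a 1 mm on-chip wire
r_old = wire_resistance(RHO_AL, L, 0.50e-6, 0.50e-6)  # 0.5 um wire
r_new = wire_resistance(RHO_AL, L, 0.25e-6, 0.25e-6)  # scaled wire
print(f"resistance grows {r_new / r_old:.0f}x when scaled")
print(f"copper cuts resistance by {1 - RHO_CU / RHO_AL:.0%}")
```

The copper number shows why it is attractive: roughly a third of the wire resistance disappears for free, buying back some of the delay lost to scaling.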

Power Dissipation
As the transistor count goes up, so does the power dissipation. More important, however, is the corresponding increase in frequency, since dynamic power grows in proportion to frequency and to the square of the supply voltage. The one variable we can still change in the power equation is the supply voltage, so we must reduce voltage as we move forward. The "same-power" voltage for a 0.18micron process is 0.5 volt, compared with 3.3 volts today. But operating at 0.5 volt will present new challenges, since a few millivolts of supply variation or noise across the processor core will then be a significant percentage of the transistor switching voltage.
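The relation at work here is the standard CMOS dynamic power equation, P ≈ α·C·V²·f. The 3.3 V and 0.5 V figures come from the text; the activity factor, switched capacitance, and frequency below are assumed values chosen only to make the ratio visible.

```python
# Standard CMOS dynamic power relation: P ~ alpha * C * V^2 * f.
# alpha (activity factor) and C (switched capacitance) are assumed
# constants here; only the V^2 ratio matters for the comparison.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    return alpha * c_farads * v_volts**2 * f_hz

p_33 = dynamic_power(0.1, 1e-9, 3.3, 300e6)  # today's 3.3 V supply
p_05 = dynamic_power(0.1, 1e-9, 0.5, 300e6)  # projected 0.5 V supply
print(f"dropping 3.3 V -> 0.5 V cuts power {p_33 / p_05:.0f}x")
```

That factor of roughly 44 is the headroom that lets frequency and transistor count keep climbing while total power stays manageable, which is exactly why supply voltage must fall with each generation.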

Lithography Limits
Today's 0.35micron process uses visible light in its production lithographic process. Intel's 0.25micron process has moved to deep ultraviolet. Research into the 0.18micron process is pushing deeper into the ultraviolet spectrum: the benefit of 193nm light is that the same optical tools and process methods can be used. Good experimental results have been obtained in the lab, but tools do not currently exist for large-area coverage or for a production environment. Looking forward to 0.13micron, Intel is researching extreme ultraviolet, x-rays, and direct-write electron-beam techniques.

Micro 2012
There are no theoretical or practical challenges that will prevent Moore's Law from holding true for at least another 20 years, another five generations of processors. Using Moore's Law to predict out to 2012, Intel should have the ability to integrate 1 billion transistors onto a production die operating at 10GHz. This could result in a performance of 100,000 MIPS, the same increase over the Pentium® II processor as the Pentium II was over the 386! This prediction is so staggering that it borders on unbelievable, but our increased investment in silicon R&D has continued to produce these kinds of results. Intel sees no fundamental barriers in its path to Micro 2012, and the theoretical physical limitations of wafer fabrication technology won't be reached until the year 2017.
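The billion-transistor projection is just compound doubling. The starting count of 7.5 million transistors (Pentium II class, 1997) is an assumed round figure for this sketch, and a doubling period of two years is taken from the slower end of the 18-24 month range.

```python
# Projecting Moore's Law forward: assume ~7.5 million transistors in
# 1997 (an illustrative Pentium II-class figure) and one doubling
# every two years until 2012.

start_year, end_year = 1997, 2012
start_transistors = 7.5e6

doublings = (end_year - start_year) / 2          # 7.5 doublings
transistors = start_transistors * 2 ** doublings
print(f"~{transistors / 1e9:.1f} billion transistors by {end_year}")
```

Fifteen years of doubling every two years multiplies the count by about 180, landing within striking distance of the 1 billion transistors the text predicts.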

Impact on the PC Platform
For the first time since the original PC, processor performance is being applied beyond the primary needs of software applications. Some applications will continue to drive for maximum available performance, but others will use the processor's capabilities in other ways. Mobile systems can slow the processor down to conserve battery life and still deliver incredible performance to the user. PCs targeted at home entertainment could use the processor performance to decompress an MPEG-2 video stream in software; this reduces the cost of the PC by removing specialist decompression hardware, making the system more affordable to a larger audience. The operating system can also take advantage of this increased processor performance by doing system-integrity work such as virus detection, software upgrades, file consolidation, and systems management as background tasks. The delivered processor performance is allowing the PC platform to evolve out of its "one-size-fits-all" paradigm into purpose-built systems that match customer requirements, as shown in Figure 4. These systems are all based upon the same processor core, so they all run the same operating system and support the same applications. Each system is tuned on top of this baseline capability to deliver features targeted at its respective audience. Applications are designed to be scalable across multiple platforms, and a "high-end" system will typically deliver a better user experience.

Figure 4: PC Platform Purpose-Built Systems.

New PC Platform Usage Models
One area that has taken full advantage of advancing silicon technology is graphics. It is now possible to get amazing 2D and eye-popping 3D graphics at volume desktop price points. This is a natural evolution of the desktop platform, and the term "visual computing" has been coined to describe this recent phenomenon (see last month's Platform Solutions Focus section on Visual Computing). It is now just as easy to create a full-color video clip with sound as it used to be to create a graphic layout; and if you are trying to sell a vacation in Hawaii, a video with waves softly crashing in the background is much more compelling than a static graphic. The Internet is changing how people interact. When combined with visual computing capabilities, businesses and consumers will have the ability to interact "screen-to-screen," no matter where they are located.

Lower Cost Peripherals
Another area where processor performance is used to great effect is in lowering the cost of attached peripherals. The modern peripheral is essentially a "sensor-on-a-wire," and any required signal processing is done by the host processor. A digital camera, for example, captures its picture on a CCD sensor and passes this raw data over a Universal Serial Bus (USB) cable into the PC platform; the host processor implements color space conversion, aspect ratio correction, scaling and interpolation, gamma correction, and white balance. The software can also correct for dead pixels in the sensor and manage user preferences. The same technique is being employed in scanners, photo-printers, plotters, and a variety of new gaming peripherals. This reduces the cost of the platform by removing specialist hardware and thus makes the system more affordable to a greater market.
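Two of the host-side steps named above, white balance and gamma correction, can be sketched in a few lines. The channel gains and gamma value below are illustrative assumptions, not parameters of any actual camera driver.

```python
# Minimal sketch of host-side camera processing: the peripheral ships
# raw sensor values and the host CPU does the signal processing.
# Gains and gamma are illustrative assumptions.

def white_balance(pixel, gains=(1.2, 1.0, 1.4)):
    """Scale the R, G, B channels by per-channel gains, clamped to 8 bits."""
    return tuple(min(255, int(c * g)) for c, g in zip(pixel, gains))

def gamma_correct(pixel, gamma=2.2):
    """Map linear sensor values onto a display-friendly gamma curve."""
    return tuple(int(255 * (c / 255) ** (1 / gamma)) for c in pixel)

raw = [(10, 200, 30), (120, 120, 120)]  # fake raw RGB sensor data
processed = [gamma_correct(white_balance(p)) for p in raw]
print(processed)
```

Because these steps are just arithmetic over pixel arrays, moving them from dedicated camera hardware onto an ever-faster host processor is exactly the cost trade the paragraph describes.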

Summary
The silicon technology inside the PC platform continues to evolve, delivering twice the processor capability every 18-24 months. The electronics around the processor are also evolving so that these increased capabilities can be used by the basic PC platform. The PC platform itself is diversifying into multiple usage models and application areas, all driven by customer demand. This all adds up to a better user experience, made possible by Intel continuing to prove Moore's Law true.



To see and hear a webcast replay of Gordon Moore's Intel Developer Forum keynote presentation, visit the IDF web site.

For more information on the technologies and happenings at IDF, read the Top Story "PC Evolution Accelerates at Fall Intel Developer Forum" in this month's issue of Platform Solutions.

To learn more about the platform technologies Intel is driving to keep pace with Moore's Law, visit the Platforms and Technology pages in Platform Solutions.
* Legal Information © 1998 Intel Corporation